15 research outputs found

    Exploring self-interruptions as a strategy for regaining the attention of distracted users

    Carlmeyer B, Schlangen D, Wrede B. Exploring self-interruptions as a strategy for regaining the attention of distracted users. In: Proceedings of the 1st Workshop on Embodied Interaction with Smart Environments - EISE '16. New York, NY: Association for Computing Machinery (ACM); 2016: 1.

    "Look at Me!": Self-Interruptions as Attention Booster?

    Carlmeyer B, Schlangen D, Wrede B. "Look at Me!": Self-Interruptions as Attention Booster? In: Proceedings of the Fourth International Conference on Human Agent Interaction - HAI '16. Singapore: Association for Computing Machinery (ACM); 2016

    Ready for the Next Step?: Investigating the Effect of Incremental Information Presentation in an Object Fetching Task

    Chromik M, Carlmeyer B, Wrede B. Ready for the Next Step?: Investigating the Effect of Incremental Information Presentation in an Object Fetching Task. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17. New York, NY: Association for Computing Machinery (ACM); 2017: 1.

    Towards Closed Feedback Loops in HRI

    Carlmeyer B, Schlangen D, Wrede B. Towards Closed Feedback Loops in HRI. In: Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction - MMRWHRI '14. New York, NY: Association for Computing Machinery (ACM); 2014: 1.

    Interactive Hesitation Synthesis: Modelling and Evaluation

    Betz S, Carlmeyer B, Wagner P, Wrede B. Interactive Hesitation Synthesis: Modelling and Evaluation. Multimodal Technologies and Interaction. 2018;2(1): 9.

    Interaction Guidelines for Personal Voice Assistants in Smart Homes

    Huxohl T, Pohling M, Carlmeyer B, Wrede B, Hermann T. Interaction Guidelines for Personal Voice Assistants in Smart Homes. In: 2019 International Conference on Speech Technology and Human-Computer Dialogue (SpeD). Piscataway, NJ: IEEE; 2019: 1-10.

    The use of Personal Voice Assistants (PVAs) such as Alexa and the Google Assistant is rising steadily, but there is a lack of research investigating common issues and requests of PVAs in the context of smart home control. We address this gap with an online survey (n = 65), using a qualitative evaluation of users’ problems and improvement requests. Our analysis leads to a partly hierarchic clustering of issues and recommendations for the interaction capabilities of PVAs into seven basic categories, allowing us in turn to derive implications and to condense them into design guidelines for future Human-Agent Interaction (HAI) with PVAs. Specifically, we formulate and elaborate the concepts Authentication & Authorization, Activity-Based Interaction, Situated Dialogue, and Explainability & Transparency as key topics for making progress towards smooth interaction with smart homes.

    The Hesitating Robot - Implementation and First Impressions

    Carlmeyer B, Betz S, Wagner P, Wrede B, Schlangen D. The Hesitating Robot - Implementation and First Impressions. In: Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18. New York, NY: Association for Computing Machinery (ACM); 2018: 77-78.

    Are you talking to me? Improving the robustness of dialogue systems in a multi party HRI scenario by incorporating gaze direction and lip movement of attendees

    Richter V, Carlmeyer B, Lier F, et al. Are you talking to me? Improving the robustness of dialogue systems in a multi party HRI scenario by incorporating gaze direction and lip movement of attendees. In: Proceedings of the Fourth International Conference on Human Agent Interaction - HAI '16. Singapore: Association for Computing Machinery (ACM); 2016.

    In this paper we present our humanoid robot “Meka”, participating in a multi-party human-robot dialogue scenario. Active arbitration of the robot's attention based on multi-modal stimuli is utilised to attend to persons who are outside of the robot's field of view. We investigate the impact of this attention management and of addressee recognition on the robot's capability to distinguish utterances directed at it from communication between humans. Based on the results of a user study, we show that mutual gaze at the end of an utterance, as a means of yielding a turn, is a substantial cue for addressee recognition. Verification of a speaker through the detection of lip movements can be used to further increase precision. Furthermore, we show that even a rather simplistic fusion of gaze and lip movement cues allows a considerable enhancement in addressee estimation, and can be adapted to the requirements of a particular scenario.

    How to address smart homes with a social robot? A multi-modal corpus of user interactions with an intelligent environment

    Holthaus P, Leichsenring C, Bernotat J, et al. How to address smart homes with a social robot? A multi-modal corpus of user interactions with an intelligent environment. In: Calzolari N, ed. LREC 2016, Tenth International Conference on Language Resources and Evaluation [Proceedings]. Paris: European Language Resources Association (ELRA); 2016: 3440-3446.

    In order to explore intuitive verbal and non-verbal interfaces in smart environments, we recorded user interactions with an intelligent apartment. Besides offering various interactive capabilities itself, the apartment is also inhabited by a social robot that is available as a humanoid interface. This paper presents a multi-modal corpus that contains goal-directed actions of naive users attempting to solve a number of predefined tasks. Alongside audio and video recordings, our data set consists of a large amount of temporally aligned sensory data and system behavior provided by the environment and its interactive components. Non-verbal system responses such as changes in light or display contents, as well as robot and apartment utterances and gestures, serve as a rich basis for later in-depth analysis. Manual annotations provide further information about metadata such as the current course of study, and about user behavior including the incorporated modality, all literal utterances, language features, emotional expressions, foci of attention, and addressees.

    Welcome to the future – How naïve users intuitively address an intelligent robotics apartment

    Bernotat J, Schiffhauer B, Eyssel FA, et al. Welcome to the future – How naïve users intuitively address an intelligent robotics apartment. In: Agah A, Cabibihan JJ, Howard AM, Salichs MA, He H, eds. Lecture Notes in Artificial Intelligence (LNAI). Vol 9979. Heidelberg/Berlin: Springer; 2016: 982-992.